# Multi-modal Input

## Ola
License: Apache-2.0
Ola-7B is a multi-modal language model jointly developed by Tencent, Tsinghua University, and Nanyang Technological University. Built on the Qwen2.5 architecture, it accepts text, image, video, and audio inputs and produces text output.
Tags: Video, Safetensors, Supports Multiple Languages
Organization: THUdyh
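
Below is a minimal sketch of loading a multi-modal checkpoint such as Ola-7B from the Hugging Face Hub with the `transformers` library. The repo id `THUdyh/Ola-7B`, the use of `trust_remote_code`, and the text-only prompt are assumptions for illustration; Ola ships its own processing code, so the actual image, video, and audio preprocessing API may differ from this generic pattern.

```python
# Minimal sketch, assuming the checkpoint is published as "THUdyh/Ola-7B"
# and can be loaded through the generic transformers auto classes with
# trust_remote_code enabled. Not the official Ola usage example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUdyh/Ola-7B"  # assumed Hugging Face Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # spread weights across available devices
    trust_remote_code=True,  # the model code is bundled with the checkpoint
)

# Text-only round trip; image, video, and audio inputs go through the
# model's own processor, which is not shown here.
inputs = tokenizer(
    "Describe what a multi-modal model can do.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```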